Search Results

Documents authored by Zhu, Rui


Document
Automatically Discovering Conceptual Neighborhoods Using Machine Learning Methods

Authors: Ling Cai, Krzysztof Janowicz, and Rui Zhu

Published in: LIPIcs, Volume 240, 15th International Conference on Spatial Information Theory (COSIT 2022)


Abstract
Qualitative spatio-temporal reasoning (QSTR) plays a key role in spatial cognition and artificial intelligence (AI) research. In the past, research and applications of QSTR have often taken place in the context of declarative forms of knowledge representation. For instance, conceptual neighborhoods (CN) and composition tables (CT) of relations are introduced explicitly and utilized for spatial/temporal reasoning. Orthogonal to this line of study, we focus on bottom-up machine learning (ML) approaches to investigate QSTR. More specifically, we are interested in whether similarities between qualitative relations can be learned purely from data by ML models and, if so, how the learned structures differ from those studied by traditional approaches. To this end, we propose a graph-based approach that examines the similarity of relations by analyzing trained ML models. In a series of experiments on synthetic data, we demonstrate that the relationships discovered by ML models align well with the CN structures introduced in the (theoretical) literature, for both spatial and temporal reasoning. Notably, even with severely limited qualitative information for training, ML models are still able to automatically construct neighborhood structures. Moreover, this data-driven approach reveals patterns of asymmetric similarities between relations. To the best of our knowledge, our work is the first to automatically discover CNs without any domain knowledge. Our results can be applied to discover the CNs of any set of jointly exhaustive and pairwise disjoint (JEPD) relations.
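
A minimal sketch of what such data-driven CN discovery might look like, assuming Allen's interval relations and synthetic interval pairs: train a small classifier to predict the relation from endpoint differences, then read candidate neighborhood edges off the row-normalized (and hence possibly asymmetric) confusion matrix on held-out data. The features, model, and threshold below are illustrative assumptions, not the authors' actual pipeline.

# Hypothetical sketch (not the paper's code): estimate a conceptual-
# neighborhood graph for Allen's interval relations by training a
# classifier on synthetic intervals and reading relation similarity
# off the confusion matrix.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

RELATIONS = ["before", "meets", "overlaps", "starts", "during",
             "finishes", "equals"]  # one half of Allen's 13 relations

def allen_relation(s1, e1, s2, e2):
    """Allen relation of interval (s1, e1) with respect to (s2, e2)."""
    if e1 < s2:  return "before"
    if e1 == s2: return "meets"
    if s1 < s2 and s2 < e1 < e2: return "overlaps"
    if s1 == s2 and e1 < e2: return "starts"
    if s2 < s1 and e1 < e2: return "during"
    if s2 < s1 and e1 == e2: return "finishes"
    if s1 == s2 and e1 == e2: return "equals"
    return None  # an inverse relation; skipped in this sketch

# Synthetic labelled interval pairs; endpoint differences as features.
rng = np.random.default_rng(42)
X, y = [], []
while len(y) < 30000:
    s1, s2 = rng.integers(0, 20, size=2)
    e1, e2 = s1 + rng.integers(1, 15), s2 + rng.integers(1, 15)
    rel = allen_relation(s1, e1, s2, e2)
    if rel is not None:
        X.append([s1 - s2, e1 - e2, e1 - s2, s1 - e2])
        y.append(RELATIONS.index(rel))
X, y = np.array(X, dtype=float), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=800,
                    random_state=0).fit(X_tr, y_tr)

# Row-normalized confusion matrix as an (asymmetric) similarity measure.
C = confusion_matrix(y_te, clf.predict(X_te),
                     labels=list(range(len(RELATIONS))))
S = C / np.maximum(C.sum(axis=1, keepdims=True), 1)

# Emit candidate neighborhood edges (threshold is illustrative).
for i, ri in enumerate(RELATIONS):
    for j, rj in enumerate(RELATIONS):
        if i != j and S[i, j] > 0.02:
            print(f"{ri} -> {rj}: confusion rate {S[i, j]:.3f}")

In such a setup, the strongest confusion edges typically connect relations that differ by a small continuous deformation of the intervals (e.g., before vs. meets), which is precisely the intuition behind conceptual neighborhoods.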

Cite as

Ling Cai, Krzysztof Janowicz, and Rui Zhu. Automatically Discovering Conceptual Neighborhoods Using Machine Learning Methods. In 15th International Conference on Spatial Information Theory (COSIT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 240, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{cai_et_al:LIPIcs.COSIT.2022.3,
  author =	{Cai, Ling and Janowicz, Krzysztof and Zhu, Rui},
  title =	{{Automatically Discovering Conceptual Neighborhoods Using Machine Learning Methods}},
  booktitle =	{15th International Conference on Spatial Information Theory (COSIT 2022)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-257-0},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{240},
  editor =	{Ishikawa, Toru and Fabrikant, Sara Irina and Winter, Stephan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2022.3},
  URN =		{urn:nbn:de:0030-drops-168884},
  doi =		{10.4230/LIPIcs.COSIT.2022.3},
  annote =	{Keywords: Qualitative Spatial Reasoning, Qualitative Temporal Reasoning, Conceptual Neighborhood, Machine Learning, Knowledge Discovery}
}

Document
xNet+SC: Classifying Places Based on Images by Incorporating Spatial Contexts

Authors: Bo Yan, Krzysztof Janowicz, Gengchen Mai, and Rui Zhu

Published in: LIPIcs, Volume 114, 10th International Conference on Geographic Information Science (GIScience 2018)


Abstract
With recent advancements in deep convolutional neural networks, researchers in geographic information science have gained access to powerful models for addressing challenging problems such as extracting objects from satellite imagery. However, as the underlying techniques are essentially borrowed from other research fields, e.g., computer vision or machine translation, they are often not spatially explicit. In this paper, we demonstrate how utilizing the rich information embedded in spatial contexts (SC) can substantially improve the classification of place types from images of their facades and interiors. By experimenting with different types of spatial contexts, namely spatial relatedness, spatial co-location, and spatial sequence patterns, we improve the accuracy of state-of-the-art models such as ResNet, which are known to outperform humans on the ImageNet dataset, by over 40%. Our study raises awareness of the value of leveraging spatial contexts and domain knowledge in general when advancing deep learning models, thereby also demonstrating that theory-driven and data-driven approaches are mutually beneficial.
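
As a minimal sketch of the core idea, one simple way to incorporate spatial context is to re-rank an image classifier's place-type distribution with a co-location prior derived from the types of nearby places. Everything below (the function, the co-location matrix, the weighting scheme, the numbers) is an illustrative assumption, not the paper's implementation.

# Hypothetical sketch of context-aware re-ranking (names and statistics
# are made up, not the paper's API): combine an image classifier's
# distribution over place types with a spatial co-location prior
# derived from nearby places.
import numpy as np

PLACE_TYPES = ["restaurant", "bakery", "pharmacy", "bookstore"]

def rerank_with_context(image_probs, nearby_types, cooccurrence, alpha=0.5):
    """Blend image evidence with a co-location prior.

    image_probs:   model's softmax over PLACE_TYPES for the facade photo
    nearby_types:  indices of place types observed around the location
    cooccurrence:  matrix C where C[i, j] ~ P(type i | type j nearby)
    alpha:         weight of the spatial-context signal
    """
    # Context prior: average co-location probability given the neighbors.
    prior = cooccurrence[:, nearby_types].mean(axis=1)
    prior = prior / prior.sum()
    # Log-linear combination of the two signals, then renormalize.
    scores = np.log(image_probs + 1e-12) + alpha * np.log(prior + 1e-12)
    probs = np.exp(scores - scores.max())
    return probs / probs.sum()

# Toy example: the image model slightly prefers "bakery", but the
# surrounding places (a pharmacy and a bookstore) make "restaurant"
# relatively more plausible under the made-up co-location statistics.
image_probs = np.array([0.30, 0.35, 0.20, 0.15])
cooccurrence = np.array([
    [0.50, 0.30, 0.40, 0.35],   # restaurants co-locate broadly
    [0.30, 0.40, 0.10, 0.10],
    [0.25, 0.10, 0.30, 0.20],
    [0.20, 0.10, 0.20, 0.30],
])
posterior = rerank_with_context(image_probs, nearby_types=[2, 3],
                                cooccurrence=cooccurrence)
for t, p in zip(PLACE_TYPES, posterior):
    print(f"{t}: {p:.3f}")

The log-linear blend here is only the simplest possible fusion; the paper instead evaluates three concrete context signals (spatial relatedness, spatial co-location, and spatial sequence patterns) on top of CNN image models.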

Cite as

Bo Yan, Krzysztof Janowicz, Gengchen Mai, and Rui Zhu. xNet+SC: Classifying Places Based on Images by Incorporating Spatial Contexts. In 10th International Conference on Geographic Information Science (GIScience 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 114, pp. 17:1-17:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{yan_et_al:LIPIcs.GISCIENCE.2018.17,
  author =	{Yan, Bo and Janowicz, Krzysztof and Mai, Gengchen and Zhu, Rui},
  title =	{{xNet+SC: Classifying Places Based on Images by Incorporating Spatial Contexts}},
  booktitle =	{10th International Conference on Geographic Information Science (GIScience 2018)},
  pages =	{17:1--17:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-083-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{114},
  editor =	{Winter, Stephan and Griffin, Amy and Sester, Monika},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.GISCIENCE.2018.17},
  URN =		{urn:nbn:de:0030-drops-93450},
  doi =		{10.4230/LIPIcs.GISCIENCE.2018.17},
  annote =	{Keywords: Spatial context, Image classification, Place types, Convolutional neural network, Recurrent neural network}
}